STAMP: SMTP server topological analysis by message headers parsing
This paper presents STAMP, a tool to analyse the SMTP server overlay topology. STAMP builds a weighted, directed graph from an email database, an email log, or even a single email header, allowing a post-analysis of the SMTP overlay structure and the identification of the paths taken by an email. The objective of this tool is twofold: STAMP automatically analyses the SMTP topology both for debugging purposes (e.g., message delays, email loops) and for metrology. While several traceroute-like measurement projects attempt to map the Internet, to the best of our knowledge no tool supports such an analysis of the SMTP overlay network. The goal of the resulting graph is therefore to support methods (from graph theory, statistical analysis, etc.) that identify relaying problems. We aim to explore the impact of IP network problems on email delivery (and, conversely, of email traffic on IP networks) in conjunction with synchronously driven IP measurements. In this paper, we introduce the design and measurement methodology of the STAMP software and, as a second contribution, make the tool and several measurement databases available to the networking community.
Analytical Model of TCP Relentless Congestion Control
We introduce a model of the Relentless Congestion Control proposed by Matt Mathis. Relentless Congestion Control (RCC) is a modification of the AIMD (Additive Increase Multiplicative Decrease) congestion control that decreases the TCP congestion window by the number of lost segments instead of halving it. Despite ongoing discussions in the IRTF ICCRG group, this congestion control has, to the best of our knowledge, never been modelled. In this paper, we provide an analytical model of this novel congestion control and propose an implementation of RCC for the commonly used network simulator ns-2. We also improve RCC with a loss-retransmission detection scheme (based on SACK+) that prevents RTOs caused by the loss of a retransmission; we call this new version RCC+. The proposed models describe both the original RCC algorithm and the RCC+ improvement, and allow a better assessment of the impact of this new congestion control scheme on network traffic.
Comment: Extended version of the one presented at the 6th International Workshop on Verification and Evaluation of Computer and Communication Systems (VECOS 2012)
Rethinking reliability for long-delay networks
Delay Tolerant Networking (DTN) is currently an open research area, following the interest of space companies in the deployment of Internet protocols for the space Internet. These last years have thus seen an increase in the number of DTN protocol proposals, such as Saratoga or LTP-T. However, the goal of these protocols is to deliver as much error-free data as possible during a short contact time rather than to perform a strictly reliable data transfer. Besides this, several research works have proposed efficient acknowledgment schemes based on the SNACK mechanism; these acknowledgment strategies, however, do not comply with the DTN protocol principle. In this paper, we propose a novel reliability mechanism with an implicit acknowledgment strategy that could be used either within these new DTN proposals or in the context of multicast transport protocols. This proposal is based on a new erasure-coding concept specifically designed to provide efficient reliable transfer over bi-directional links.
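The erasure-coding principle the proposal builds on can be illustrated with its simplest instance, a single XOR parity packet that lets the receiver rebuild any one lost packet of a block without a retransmission round-trip. The paper's actual code is more elaborate; this sketch only conveys the underlying idea, and all names are our own.

```python
def xor_parity(packets: list[bytes]) -> bytes:
    """Compute a parity packet as the byte-wise XOR of equal-sized packets."""
    parity = bytearray(len(packets[0]))
    for pkt in packets:
        for i, b in enumerate(pkt):
            parity[i] ^= b
    return bytes(parity)

def recover(received: list[bytes], parity: bytes) -> bytes:
    """Rebuild the single missing packet from the survivors and the parity."""
    # XOR of all surviving packets and the parity cancels everything
    # except the lost packet.
    return xor_parity(received + [parity])
```

Avoiding the acknowledgment round-trip is exactly what matters over long-delay links, where a negative acknowledgment may take minutes to arrive.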
ECN verbose mode: a statistical method for network path congestion estimation
This article introduces a simple and effective methodology to determine the
level of congestion in a network with an ECN-like marking scheme. The purpose
of the ECN bit is to notify TCP sources of an imminent congestion in order to
react before losses occur. However, ECN is a binary indicator which does not
reflect the congestion level (i.e. the percentage of queued packets) of the
bottleneck, thus preventing any adapted reaction. In this study, we use a
counter in place of the traditional ECN marking scheme to assess the number of
times a packet has crossed a congested router. Thanks to this simple counter,
we conduct a statistical analysis to accurately estimate the congestion level of each router on a network path. We detail in this paper an analytical method, validated by preliminary simulations, which demonstrates the feasibility and accuracy of the proposed concept. We conclude with possible applications and expected future work.
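The counter idea can be sketched with a small simulation. Assuming, as an illustration, that each router on the path increments the packet's counter independently with probability equal to its congestion level p_i, the counter is a sum of Bernoulli draws and its sample mean estimates the total path congestion sum(p_i). The per-router decomposition in the paper requires the fuller statistical analysis; everything below is our own hedged example.

```python
import random

def send_packet(levels: list[float], rng: random.Random) -> int:
    """Counter value of one packet after crossing routers with congestion
    levels `levels`: each router increments it with probability p_i."""
    return sum(1 for p in levels if rng.random() < p)

def estimate_total_congestion(levels: list[float],
                              n_packets: int = 100_000,
                              seed: int = 42) -> float:
    """Estimate sum(p_i) along the path from the mean observed counter."""
    rng = random.Random(seed)
    total = sum(send_packet(levels, rng) for _ in range(n_packets))
    return total / n_packets
```

Contrast this with plain ECN, whose single bit would only tell the source that the counter was non-zero, losing the level information entirely.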
Managing network congestion with a Kohonen-based RED queue
The behaviour of the TCP AIMD algorithm is known to cause queue length
oscillations when congestion occurs at a router output link. Indeed, due to
these queueing variations, end-to-end applications experience large delay
jitter. Many studies have proposed efficient Active Queue Management (AQM)
mechanisms in order to reduce queue oscillations and stabilize the queue
length. These AQMs are mostly improvements of the Random Early Detection (RED) model. Unfortunately, these enhancements do not react in a similar manner under various network conditions and are strongly sensitive to their initial parameter settings. This paper proposes a solution that overcomes the difficulty of setting these parameters by using a Kohonen neural-network model; a further goal of this study is to investigate whether cognitive intelligence could be placed in the core network to solve such stability problems. In our context, we use results from the neural-network area to demonstrate that our proposal, named Kohonen-RED (KRED), enables a stable queue length without complex parameter settings or passive measurements.
Comment: 8 pages, 9 figures
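The parameter-sensitivity problem KRED targets is visible in the classic RED drop-probability rule, where the operator must pick two queue thresholds and a maximum probability by hand. The sketch below shows that baseline rule (following the usual RED parameter names); the Kohonen-based replacement of this tuning is the paper's contribution and is not reproduced here.

```python
def red_drop_probability(avg_queue: float,
                         min_th: float = 5.0,
                         max_th: float = 15.0,
                         max_p: float = 0.1) -> float:
    """RED drop probability for a smoothed average queue length: zero below
    min_th, max_p-scaled linear ramp between the thresholds, 1.0 above."""
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    return max_p * (avg_queue - min_th) / (max_th - min_th)
```

Every constant above (min_th, max_th, max_p) is one of the "initial parameter settings" whose poor choice destabilises the queue, which is precisely what motivates learning them instead.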
Safetynet version 2, a packet error recovery architecture for vertical handoffs
Mobile devices are connecting to the Internet through an increasingly heterogeneous network environment. This connectivity via multiple types of wireless networks allows mobile devices to take advantage of the high speed and low cost of wireless local area networks and of the large coverage of wireless wide area networks. To maximize the benefits of these complementary characteristics, mobile devices need to be able to switch seamlessly between the different network types. However, the switch between technologies, also known as a vertical handoff, often results in significant packet loss and degradation of connectivity, due to the handoff delay and to the increased packet loss rate at the border of the networks' coverage areas. In our previous work, we proposed an inter-technology mobility management architecture that addresses these packet losses by selectively resending packets lost during the handoff period. In this paper, we extend the architecture to address packet losses due to wireless errors more efficiently by using erasure codes to form redundancy packets, which we propose to send over both links. We show that this proposal reduces both the probability of packet loss and the buffering requirements of the original Safetynet scheme.
Optimization of TFRC loss history initialization
This letter deals with the initialization of the loss history structure in the TFRC (TCP-Friendly Rate Control) mechanism. This initialization occurs after the detection of the first loss event following every slow-start phase. The loss history is crucial for the algorithm since it provides the packet loss rate estimation, which is used in the TFRC equation to compute the sending rate. In this letter, we propose a new method to compute the packet loss rate that is more computationally efficient and remains as accurate as the classical, commonly used method. The motivation of this work is to reduce the computation time and to formulate a unified computation scheme. The method is based on Newton's algorithm, from numerical analysis, applied to the TCP throughput equation. This proposal is evaluated analytically, and the results show a significant improvement in computation time.
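The Newton-based idea can be sketched as inverting the TFRC throughput equation: given a target rate X, find the loss-event rate p such that X(p) = X. The formula below is the standard TFRC throughput equation (RFC 5348); the concrete iteration, starting point, and tolerances are our own illustration, not the letter's exact scheme.

```python
import math

def tfrc_rate(p: float, s: float = 1460.0,
              rtt: float = 0.1, rto: float = 0.4) -> float:
    """TFRC throughput (bytes/s) for loss-event rate p, segment size s,
    round-trip time rtt and retransmission timeout rto (seconds)."""
    denom = (rtt * math.sqrt(2 * p / 3)
             + rto * (3 * math.sqrt(3 * p / 8)) * p * (1 + 32 * p * p))
    return s / denom

def loss_rate_from_throughput(x_target: float, p0: float = 0.01,
                              tol: float = 1e-9, max_iter: int = 50) -> float:
    """Solve tfrc_rate(p) == x_target for p with Newton's method,
    using a numerical derivative."""
    p = p0
    for _ in range(max_iter):
        f = tfrc_rate(p) - x_target
        h = 1e-8
        df = (tfrc_rate(p + h) - tfrc_rate(p)) / h
        step = f / df
        p = max(p - step, 1e-12)  # keep the loss rate positive
        if abs(step) < tol:
            return p
    return p
```

Because the throughput is smooth and strictly decreasing in p, the iteration converges in a handful of steps, which is the source of the computation-time gain the letter reports.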
Towards sender-based TFRC
Pervasive communications are increasingly carried over mobile devices and personal digital assistants. This trend was observed during the last football World Cup, when cellular phone service providers measured a significant increase in multimedia traffic. To better carry multimedia traffic, the IETF standardized a new TCP-Friendly Rate Control (TFRC) protocol. However, the current receiver-based TFRC design is not well suited to resource-limited end systems. We propose a scheme that shifts resource allocation and computation to the sender. This sender-based approach led us to develop a new algorithm for loss notification and loss rate computation. We demonstrate the gains obtained in terms of memory requirements and CPU processing compared with the current design. Moreover, this shift solves security issues raised by classical TFRC implementations. We have implemented this new sender-based TFRC, named TFRC_light, and conducted measurements under real-world conditions.